
    Simulation Design of a Tomato Picking Manipulator

    Simulation is an important way to verify the feasibility of design parameters and schemes for robots. Through simulation, this paper analyzes the effectiveness of the design parameters selected for a tomato picking manipulator and verifies the rationality of its motion planning for tomato picking. Firstly, the basic parameters and workspace of the manipulator were determined based on the environment of a tomato greenhouse, and MATLAB simulation showed that the workspace of the lightweight manipulator is suitable for the picking operation. Next, the maximum theoretical torque of each joint was solved analytically, appropriate joint motors were selected, and SolidWorks simulation was performed to demonstrate the rationality of the material chosen for the manipulator and the strength design of the joint connectors. After that, the trajectory control requirements of the manipulator during picking were determined in view of the operation environment, and the feasibility of the trajectory planning was confirmed with MATLAB. Finally, a motion control system was designed for the manipulator according to the end-effector trajectory control requirements, followed by the manufacture of a prototype. The prototype experiment shows that the proposed lightweight tomato picking manipulator exhibits good kinematic performance and basically meets the requirements of tomato picking: it takes an average of 21 s to pick a tomato and achieves a success rate of 78.67%.
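    The workspace verification step described above can be illustrated with a Monte Carlo forward-kinematics check. The sketch below uses a hypothetical planar three-joint arm with assumed link lengths and joint limits (the abstract does not give the manipulator's actual parameters), sampling random joint configurations and testing whether a target point is reachable:

```python
import math
import random

# Hypothetical link lengths (m) and symmetric joint limits; the paper's
# actual manipulator parameters are not stated in the abstract.
LINKS = [0.40, 0.35, 0.20]
JOINT_LIMITS = [(-math.pi / 2, math.pi / 2)] * 3

def forward_kinematics(angles):
    """Planar forward kinematics: accumulate joint angles along the chain."""
    x = y = theta = 0.0
    for length, a in zip(LINKS, angles):
        theta += a
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

def sample_workspace(n=10_000, seed=0):
    """Monte Carlo sample of reachable end-effector positions."""
    rng = random.Random(seed)
    return [forward_kinematics([rng.uniform(lo, hi) for lo, hi in JOINT_LIMITS])
            for _ in range(n)]

def covers(points, target, tol=0.05):
    """Check whether a target point lies within `tol` of any sampled point."""
    tx, ty = target
    return any(math.hypot(x - tx, y - ty) <= tol for x, y in points)

pts = sample_workspace()
print(covers(pts, (0.60, 0.20)))  # is a typical fruit position reachable?
```

    A denser sampling, or an analytical reachability map, would be used in practice; this only conveys the idea of validating a workspace against the picking environment.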

    Towards Transaction as a Service

    This paper argues for decoupling transaction processing from existing two-layer cloud-native databases and offering transaction processing as an independent service. By building a transaction as a service (TaaS) layer, transaction processing can be scaled independently for high resource utilization and upgraded independently for development agility. Accordingly, we architect an execution-transaction-storage three-layer cloud-native database. By connecting to TaaS, 1) AP engines can be empowered with ACID TP capability, 2) multiple standalone TP engine instances can be incorporated to support multi-master distributed TP for horizontal scalability, 3) multiple execution engines with different data models can be integrated to support multi-model transactions, and 4) high-performance TP is achieved through extensive TaaS optimizations and consistent evolution. Cloud-native databases deserve better architecture: we believe that TaaS provides a path forward to better cloud-native databases.
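    To make the decoupling idea concrete, here is a toy sketch of a standalone transaction layer, not the paper's design: execution engines call begin/commit on the service while data access goes to a separate storage layer. Validation here uses a simple first-committer-wins scheme, which is one illustrative choice among many:

```python
import threading

class TransactionService:
    """Toy standalone transaction layer (a sketch of the TaaS idea,
    not the paper's implementation)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._next_ts = 0
        self._commit_ts = {}  # key -> timestamp of last committed writer

    def begin(self):
        with self._lock:
            self._next_ts += 1
            return self._next_ts  # start timestamp doubles as txn id

    def commit(self, start_ts, read_set, write_set):
        """First-committer-wins validation: abort if any key we touched
        was committed by a concurrent transaction after we started."""
        with self._lock:
            for key in set(read_set) | set(write_set):
                if self._commit_ts.get(key, 0) > start_ts:
                    return False  # conflict: caller must retry
            self._next_ts += 1
            for key in write_set:
                self._commit_ts[key] = self._next_ts
            return True

svc = TransactionService()
t1 = svc.begin()
t2 = svc.begin()
print(svc.commit(t1, read_set=["x"], write_set=["x"]))  # True
print(svc.commit(t2, read_set=["x"], write_set=["x"]))  # False: t1 won
```

    Because any engine that speaks this begin/commit protocol can attach, the same service could in principle serve AP engines and multiple TP engine instances, which is the scaling argument the abstract makes.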

    BERT4ETH: A Pre-trained Transformer for Ethereum Fraud Detection

    As various forms of fraud proliferate on Ethereum, it is imperative to safeguard against these malicious activities to protect susceptible users from being victimized. While current studies rely solely on graph-based fraud detection approaches, we argue that they may not be well-suited for dealing with highly repetitive, skew-distributed and heterogeneous Ethereum transactions. To address these challenges, we propose BERT4ETH, a universal pre-trained Transformer encoder that serves as an account representation extractor for detecting various fraud behaviors on Ethereum. BERT4ETH leverages the superior modeling capability of the Transformer to capture the dynamic sequential patterns inherent in Ethereum transactions, and addresses the challenges of pre-training a BERT model for Ethereum with three practical and effective strategies, namely repetitiveness reduction, skew alleviation and heterogeneity modeling. Our empirical evaluation demonstrates that BERT4ETH outperforms state-of-the-art methods by significant margins on both phishing account detection and de-anonymization tasks. The code for BERT4ETH is available at: https://github.com/git-disl/BERT4ETH.
    Comment: The Web Conference (WWW) 202
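    One of the three strategies, repetitiveness reduction, can be illustrated with a small sketch. This is only a guess at what such a step might look like (capping runs of identical counterparty addresses in an account's transaction sequence before tokenization); the paper's exact strategy may differ:

```python
def reduce_repetition(sequence, max_run=2):
    """Cap runs of identical counterparty addresses in a transaction
    sequence at `max_run`, so highly repetitive interactions do not
    dominate the pre-training input. Illustrative only."""
    out = []
    run = 0
    for addr in sequence:
        run = run + 1 if out and addr == out[-1] else 1
        if run <= max_run:
            out.append(addr)
    return out

seq = ["0xA", "0xA", "0xA", "0xA", "0xB", "0xA", "0xB", "0xB", "0xB"]
print(reduce_repetition(seq))  # ['0xA', '0xA', '0xB', '0xA', '0xB', '0xB']
```

    The shortened sequences would then be fed to the masked-language-model pre-training stage, in the same way BERT consumes token sequences.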

    Spiral Complete Coverage Path Planning Based on Conformal Slit Mapping in Multi-connected Domains

    Generating a smooth and shorter spiral complete coverage path in a multi-connected domain is an important research area in robotic cavity machining. Traditional spiral path planning methods in multi-connected domains involve a subregion division procedure; a deformed spiral path is incorporated within each subregion, and these paths are interconnected with bridges. In intricate domains with abundant voids and irregular boundaries, the added subregion boundaries increase the path avoidance requirements. This results in excessive bridging and necessitates longer uneven-density spirals to achieve complete subregion coverage. Considering that conformal slit mapping can transform multi-connected regions into regular disks or annuluses without subregion division, this paper presents a novel spiral complete coverage path planning method based on conformal slit mapping. Firstly, a slit mapping calculation technique is proposed for segmented cubic spline boundaries with corners. Then, a spiral path spacing control method is developed based on the maximum inscribed circle radius between adjacent conformal slit mapping iso-parameters. Lastly, the spiral path is derived by offsetting iso-parameters. The complexity and applicability of the proposed method are comprehensively analyzed across various boundary scenarios. Meanwhile, two cavity milling experiments are conducted to compare the new method with conventional spiral complete coverage path methods. The comparisons indicate that the new path meets the requirement for complete coverage in cavity machining while reducing path length and machining time by 12.70% and 12.34%, respectively.
    Comment: This article has not been formally published yet and may undergo minor content change
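    The final step, deriving the spiral by offsetting iso-parameters, can be pictured in the simplest case where the slit-mapped image is already an annulus. The sketch below, under that simplifying assumption, offsets concentric iso-parameter circles with a fixed radial pitch; the paper instead sets the spacing adaptively from inscribed-circle radii and maps the result back through the conformal map:

```python
import math

def annulus_spiral(r_in, r_out, spacing, pts_per_turn=64):
    """Spiral covering an annulus with constant radial pitch `spacing`.
    Illustrative stand-in for offsetting conformal iso-parameters."""
    turns = (r_out - r_in) / spacing
    n = max(1, int(turns * pts_per_turn))
    path = []
    for i in range(n + 1):
        t = i / n                       # 0 .. 1 along the spiral
        r = r_in + (r_out - r_in) * t   # radius grows linearly: fixed pitch
        a = 2 * math.pi * turns * t
        path.append((r * math.cos(a), r * math.sin(a)))
    return path

path = annulus_spiral(r_in=1.0, r_out=2.0, spacing=0.1)
radii = [math.hypot(x, y) for x, y in path]
print(len(path), round(min(radii), 3), round(max(radii), 3))
```

    In the actual method the annulus coordinates are images of the multi-connected domain under the slit mapping, so a uniform pitch here corresponds to a smoothly varying, bridge-free spiral in the original cavity.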

    Segmented Learning for Class-of-Service Network Traffic Classification

    Class-of-service (CoS) network traffic classification (NTC) classifies a group of similar traffic applications. CoS classification is advantageous in resource scheduling for Internet service providers and avoids the necessity of remodelling. Our goal is to find a robust, lightweight, and fast-converging CoS classifier that uses less data in modelling and does not require specialized tools for feature extraction. The commonality of statistical features among network flow segments motivates us to propose a novel segmented learning approach that includes an essential vector representation (EVR) and a simple segment-based method of classification. We represent the segmented traffic in vector form using the EVR, and the segmented traffic is then modelled for classification using a random forest. Our solution's success relies on finding the optimal segment size and the minimum number of segments required in modelling. The solution is validated on multiple datasets for various CoS services, including virtual reality (VR). Significant findings of this work are: i) synchronous services that require acknowledgment and request to continue communication are classified with 99% accuracy; ii) the initial 1,000 packets of any session are sufficient to model CoS traffic with promising results, so a CoS classifier can be deployed quickly; and iii) test results remain consistent even when trained on one dataset and tested on a different dataset. In summary, our solution is the first to propose segmented learning for NTC that uses fewer features to classify most CoS traffic with an accuracy of 99%. The implementation of our solution is available on GitHub.
    Comment: The paper is accepted to appear in IEEE GLOBECOM 202
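    The segmentation idea, splitting a flow into fixed-size segments and computing statistical features per segment, can be sketched as follows. The segment length and the particular statistics are illustrative placeholders; the paper tunes the optimal segment size and feature set empirically, and feeds the vectors to a random forest:

```python
import statistics

def segment_features(packet_sizes, segment_len=100):
    """Split a flow's packet-size sequence into fixed-length segments and
    compute per-segment statistics (one feature vector per segment).
    Feature choice and segment length are illustrative."""
    features = []
    for start in range(0, len(packet_sizes) - segment_len + 1, segment_len):
        seg = packet_sizes[start:start + segment_len]
        features.append((
            statistics.mean(seg),    # average packet size
            statistics.pstdev(seg),  # size variability within the segment
            min(seg),
            max(seg),
        ))
    return features

flow = [1500] * 250  # toy flow: 250 packets of 1500 bytes
print(segment_features(flow))
```

    Because each segment yields its own vector, a classifier sees many training examples per flow, which is consistent with the paper's finding that the first 1,000 packets of a session already suffice.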

    Energy-Efficient Message Bundling with Delay and Synchronization Constraints in Wireless Sensor Networks

    In a wireless sensor network (WSN), reducing the energy consumption of battery-powered sensor nodes is key to extending their operating duration before battery replacement is required. Message bundling can save on the energy consumption of sensor nodes by reducing the number of message transmissions. However, bundling a large number of messages can increase not only the end-to-end delays and message transmission intervals, but also the packet error rate (PER). End-to-end delays are critical in delay-sensitive applications, such as factory monitoring and disaster prevention. Message transmission intervals affect time synchronization accuracy when bundling includes synchronization messages, while an increased PER results in more message retransmissions and thereby consumes more energy. To address these issues, this paper proposes an optimal message bundling scheme based on an objective function for the total energy consumption of a WSN, which also takes into account the effects of packet retransmissions and thereby strikes the optimal balance between the number of bundled messages and the number of retransmissions for a given link quality. The proposed optimal bundling is formulated as an integer nonlinear programming problem and solved using a self-adaptive global-best harmony search (SGHS) algorithm. The experimental results, based on the Cooja emulator of Contiki-NG, demonstrate that the proposed optimal bundling scheme saves up to 51.8% and 8.8% of the total energy consumption with respect to the baseline of no bundling and a state-of-the-art integer linear programming model, respectively.
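    The trade-off between bundle size and retransmissions can be shown numerically. In the sketch below, larger bundles amortize the per-packet header but lengthen the packet, raising the PER (derived here from an assumed independent bit error rate) and hence the expected number of retransmissions. All numbers are illustrative, not the paper's model or parameters, and a simple exhaustive search stands in for the SGHS algorithm:

```python
def expected_cost_per_message(k, ber=1e-5, header_bits=400, msg_bits=256):
    """Expected bits transmitted per application message when k messages
    share one packet, with geometric retransmissions on packet loss.
    Parameters are illustrative assumptions."""
    packet_bits = header_bits + k * msg_bits
    per = 1 - (1 - ber) ** packet_bits   # PER from independent bit errors
    expected_tx = 1 / (1 - per)          # expected transmissions per packet
    return packet_bits * expected_tx / k

def best_bundle_size(max_k=50, **kw):
    """Exhaustive search over bundle sizes (stand-in for SGHS)."""
    return min(range(1, max_k + 1),
               key=lambda k: expected_cost_per_message(k, **kw))

k_star = best_bundle_size()
print(k_star, round(expected_cost_per_message(k_star), 1))
```

    The minimizer sits strictly between "no bundling" and "bundle everything", which is exactly the balance the objective function in the paper is designed to strike, with delay and synchronization constraints added on top.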